ChatGPT fooled US lawyer, faces sanction
The AI chatbot has fooled a lawyer into believing that citations given by ChatGPT in a case against Colombian airline Avianca were real while they were, in fact, bogus
San Francisco: ChatGPT has fooled a lawyer into believing that citations given by the AI chatbot in a case against Colombian airline Avianca were real while they were, in fact, bogus.
Lawyer Steven A Schwartz, representing a man who sued an airline, admitted in an affidavit that he had used OpenAI's chatbot for his research, reports The New York Times. After the opposing counsel pointed out the non-existent cases, US District Judge Kevin Castel confirmed that six of the submitted cases "appear to be bogus judicial decisions with bogus quotes and bogus internal citations".
The judge has now scheduled a hearing as he considers sanctions against the plaintiff's lawyers. According to Schwartz, he did ask the chatbot whether the cases it cited were genuine. When the lawyer asked for a source, ChatGPT apologised for the earlier confusion and insisted the case was real.
ChatGPT also maintained that the other cases it cited were all real. Schwartz said he was "unaware of the possibility that its content could be false." He "greatly regrets having utilised generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity."
Last month, ChatGPT falsely included an innocent and highly respected US law professor in a list of legal scholars who had sexually harassed students in the past.
Jonathan Turley, Shapiro Chair of Public Interest Law at George Washington University, was shocked to learn that ChatGPT had named him in response to a research query about legal scholars accused of sexual harassment. "ChatGPT recently issued a false story accusing me of sexually assaulting students," Turley posted in a tweet.